21 research outputs found

    Visuomotor control, eye movements, and steering: A unified approach for incorporating feedback, feedforward, and internal models

    The authors present an approach to the coordination of eye movements and locomotion in naturalistic steering tasks. It is based on recent empirical research, in particular on driver eye movements, which poses challenges for existing accounts of how we visually steer a course. They first analyze how the ideas of feedback and feedforward processes and internal models are treated in control-theoretical steering models within vision science and engineering, which share an underlying architecture but have historically developed in very separate ways. The authors then show how these traditions can be naturally (re)integrated with each other and with contemporary neuroscience, to better understand the skill and gaze strategies involved. They then propose a conceptual model that (a) gives a unified account of the coordination of gaze and steering control, (b) incorporates higher-level path planning, and (c) draws on the literature on paired forward and inverse models in predictive control. Although each of these (a–c) has been considered before (also in the context of driving), integrating them into a single framework, and the authors' multiple waypoint identification hypothesis within that framework, are novel. The proposed hypothesis is relevant to all forms of visually guided locomotion. Peer reviewed.
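    The architecture described above is generic enough to sketch. The toy Python loop below is our illustration, not the authors' model: the unicycle dynamics, gains, and noise level are all assumptions. It shows how an inverse model can supply a feedforward steering command toward a waypoint while a paired forward model predicts the outcome and feedback corrects the prediction error:

```python
# Toy predictive-control loop (illustrative only): inverse model for
# feedforward, forward model for prediction, feedback on prediction error.
import numpy as np

DT = 0.05          # simulation step (s), assumed
K_FB = 0.5         # feedback gain on prediction error, assumed

def inverse_model(state, waypoint):
    """Steering command that turns the heading toward the waypoint."""
    x, y, heading = state
    desired = np.arctan2(waypoint[1] - y, waypoint[0] - x)
    return float(np.clip(desired - heading, -0.3, 0.3))

def forward_model(state, command, speed=8.0):
    """Predicted next state under a simple unicycle model (assumed)."""
    x, y, heading = state
    heading = heading + command * DT
    return np.array([x + speed * np.cos(heading) * DT,
                     y + speed * np.sin(heading) * DT,
                     heading])

rng = np.random.default_rng(0)
estimate = np.zeros(3)            # controller's internal state estimate
true_state = np.zeros(3)          # actual vehicle state
waypoint = np.array([20.0, 5.0])
for _ in range(60):
    u = inverse_model(estimate, waypoint)      # feedforward via inverse model
    predicted = forward_model(estimate, u)     # forward-model prediction
    true_state = forward_model(true_state, u) + rng.normal(0.0, 0.01, 3)
    estimate = predicted + K_FB * (true_state - predicted)  # feedback update
print("final position:", np.round(true_state[:2], 2))
```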

    Controlling Steering Using Vision

    The Two-level model is a popular account of how humans use visual information to successfully control steering within road edges. A guidance component uses information from far regions to preview upcoming steering requirements, and a compensatory component uses information from near regions to stabilise position-in-lane. Researchers who have considered the case of driving often treat road edges as the sole informational input for controlling steering, but this approach is not consistent with the notion that the human visual system adaptively uses multiple inputs to maintain robust control of steering. A rich source of information which may also be useful for steering control is optic flow. Chapter 2 demonstrates that optic flow speed is used to control steering even with road edges present. Chapters 3-5 develop a framework to examine how the use of flow speed changes depending on the availability of guidance or compensatory road-edge information, and demonstrate that use of flow speed increases only when guidance-level information (far road edges) is present. Chapters 6-7 go on to examine the contribution of flow direction to controlling steering within road edges, and demonstrate that the use of flow direction appears to be yoked to the presence of compensatory information (near road edges). Together, these experiments demonstrate that the contribution of flow information to controlling steering within road edges can be understood within the context of two-level steering, and show that an approach which emphasises robust control through combining multiple informational inputs is vital if we are to fully understand how the visual-motor system solves the problem of steering along constrained paths.
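    For readers unfamiliar with two-level (two-point) control, a common formulation expresses steering rate as a weighted sum of far-point and near-point signals, in the spirit of Salvucci and Gray (2004). The sketch below adds a hypothetical flow-speed term purely to illustrate how an extra informational input could enter such a law; the gains and the flow term are our assumptions, not the thesis's fitted model:

```python
# Two-point steering law in the spirit of Salvucci & Gray (2004): steering
# rate is driven by the rotation rates of a far (guidance) point and a near
# (compensatory) point, plus the near-point angle. The flow term is our
# hypothetical addition; all gains are illustrative.
K_FAR, K_NEAR, K_I = 16.0, 12.0, 4.0   # illustrative two-point gains
K_FLOW = 0.5                            # hypothetical flow-speed weight

def steering_rate(theta_far_dot, theta_near_dot, theta_near,
                  flow_speed_error=0.0):
    """Rate of change of steering angle (rad/s)."""
    two_point = (K_FAR * theta_far_dot + K_NEAR * theta_near_dot
                 + K_I * theta_near)
    return two_point + K_FLOW * flow_speed_error

# Far point drifting right while the near point is roughly centred:
print(steering_rate(theta_far_dot=0.01, theta_near_dot=0.0, theta_near=0.002))
```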

    Humans Use Predictive Gaze Strategies to Target Waypoints for Steering

    A major unresolved question in understanding visually guided locomotion in humans is whether actions are driven solely by the immediately available optical information (model-free online control mechanisms), or whether internal models have a role in anticipating the future path. We designed two experiments to investigate this issue, measuring spontaneous gaze behaviour while steering, and predictive gaze behaviour when future path information was withheld. In Experiment 1, participants (N = 15) steered along a winding path with rich optic flow: gaze patterns were consistent with tracking waypoints on the future path 1–3 s ahead. In Experiment 2, participants (N = 12) followed a path presented only in the form of visual waypoints located on an otherwise featureless ground plane. New waypoints appeared periodically every 0.75 s and predictably 2 s ahead, except in 25% of the cases the waypoint at the expected location was not displayed. In these cases, there were always other visible waypoints for the participant to fixate, yet participants continued to make saccades to the empty, but predictable, waypoint locations (in line with internal models of the future path guiding gaze fixations). This would not be expected based upon existing model-free online steering control models, and strongly points to a need for models of steering control to include mechanisms for predictive gaze control that support anticipatory path-following behaviours. Peer reviewed.
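    The Experiment 2 design is concrete enough to sketch from the numbers in the abstract (0.75 s inter-waypoint interval, 2 s look-ahead, 25% omissions). The toy schedule below, with an assumed locomotion speed and gaze criterion, shows how omitted-waypoint trials can be generated and how a saccade to an empty but predictable location would be flagged:

```python
# Toy version of the waypoint schedule described in the abstract; the speed,
# path parameterisation, and gaze criterion radius are our assumptions.
import numpy as np

rng = np.random.default_rng(1)
SPEED = 8.0                 # locomotion speed (m/s), assumed
INTERVAL, LEAD = 0.75, 2.0  # s between waypoints; look-ahead (from abstract)
OMIT_P = 0.25               # proportion of withheld waypoints (from abstract)

def waypoint_schedule(n_waypoints):
    """(onset time, distance along path, shown?) per scheduled waypoint."""
    return [(i * INTERVAL,                   # onset time (s)
             SPEED * (i * INTERVAL + LEAD),  # location ~2 s ahead of observer
             rng.random() >= OMIT_P)         # 25% withheld at random
            for i in range(n_waypoints)]

def is_predictive_saccade(gaze_pos, expected_pos, shown, radius=1.0):
    """Gaze lands near the expected location although nothing was displayed."""
    return (not shown) and abs(gaze_pos - expected_pos) < radius

for t, pos, shown in waypoint_schedule(8):
    print(f"t={t:4.2f} s  waypoint at {pos:5.1f} m  shown={shown}")
```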

    Looking at the Road When Driving Around Bends: Influence of Vehicle Automation and Speed

    When negotiating bends, car drivers perform gaze polling: their gaze shifts between guiding fixations (GFs; gaze directed 1–2 s ahead) and look-ahead fixations (LAFs; longer time headway). How might this behavior change in autonomous vehicles, where the need for constant active visual guidance is removed? In this driving simulator study, we analyzed this gaze behavior both when the driver was in charge of steering and when steering was delegated to automation, separately for the bend approach (straight line) and the entry of the bend (turn), and at various speeds. The analysis of gaze distributions relative to bend sections and driving conditions indicates that visual anticipation (through LAFs) is most prominent before entering the bend. Passive driving increased the proportion of LAFs, with a concomitant decrease of GFs, and increased the gaze polling frequency. Gaze polling frequency also increased at higher speeds, in particular during the bend approach when steering was not performed. LAFs encompassed a wide range of eccentricities. To account for this heterogeneity, two sub-categories serving distinct information requirements are proposed: mid-eccentricity LAFs could be more useful for anticipatory planning of steering actions, and far-eccentricity LAFs for monitoring potential hazards. The results support the idea that gaze and steering coordination may be strongly impacted in autonomous vehicles. Peer reviewed.
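    A minimal sketch of the fixation taxonomy used here, based on the time-headway bands stated in the abstract (GFs at 1–2 s, LAFs beyond), together with a simple gaze-polling-frequency measure; the handling of fixations nearer than 1 s and the input format are our assumptions:

```python
# Classify fixations by time headway and count GF<->LAF alternations.
def classify_fixation(time_headway_s):
    if 1.0 <= time_headway_s <= 2.0:
        return "GF"     # guiding fixation, 1-2 s ahead (from abstract)
    if time_headway_s > 2.0:
        return "LAF"    # look-ahead fixation, longer headway (from abstract)
    return "other"      # fixations nearer than 1 s ahead (our assumption)

def polling_frequency(labels, duration_s):
    """Alternations per second between GF and LAF episodes."""
    core = [l for l in labels if l in ("GF", "LAF")]
    switches = sum(a != b for a, b in zip(core, core[1:]))
    return switches / duration_s

headways = [1.4, 1.6, 3.5, 1.2, 4.0, 1.8]   # time headways (s), illustrative
labels = [classify_fixation(h) for h in headways]
print(labels, polling_frequency(labels, duration_s=6.0))
```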

    A Framework for Auditable Synthetic Data Generation

    Synthetic data has gained significant momentum thanks to sophisticated machine learning tools that enable the synthesis of high-dimensional datasets. However, many generation techniques do not give the data controller control over which statistical patterns are captured, leading to concerns over privacy protection. While synthetic records are not linked to a particular real-world individual, they can reveal information about users indirectly, which may be unacceptable for data owners. There is thus a need to empirically verify the privacy of synthetic data -- a particularly challenging task in high-dimensional data. In this paper we present a general framework for synthetic data generation that gives data controllers full control over which statistical properties the synthetic data ought to preserve, what exact information loss is acceptable, and how to quantify it. The benefits of the approach are that (1) one can generate synthetic data that results in high utility for a given task, while (2) empirically validating that only statistics considered safe by the data curator are used to generate the data. We thus show the potential for synthetic data to be an effective means of releasing confidential data safely, while retaining useful information for analysts.
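    To make the idea concrete, here is a toy illustration (not the paper's framework) in which the data controller declares that only per-column marginal distributions may be preserved; the generator is built from those statistics alone, so anything beyond them, such as cross-column correlations, is excluded by construction:

```python
# Toy "declared statistics only" synthesizer: fit per-column marginals,
# then sample each column independently from its marginal.
import numpy as np

rng = np.random.default_rng(2)

def fit_declared_statistics(real, columns):
    """Capture only the declared statistics: each column's empirical marginal."""
    return {c: np.unique(real[c], return_counts=True) for c in columns}

def generate(stats, n):
    """Sample each column independently from its declared marginal."""
    return {c: rng.choice(values, size=n, p=counts / counts.sum())
            for c, (values, counts) in stats.items()}

real = {"age_band": np.array(["18-30", "18-30", "31-50", "51+"]),
        "region":   np.array(["N", "S", "S", "S"])}
stats = fit_declared_statistics(real, columns=["age_band", "region"])
print(generate(stats, n=6))
# By construction, nothing beyond the declared marginals (e.g. the association
# between age_band and region) can leak into the synthetic data.
```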

    Steering bends and changing lanes: the impact of optic flow and road edges on two point steering control

    Successful driving involves steering corrections that respond to immediate positional errors whilst also anticipating upcoming changes to the road layout ahead. In popular steering models these tasks are often treated as separate functions using two points: the near region for correcting current errors, and the far region for anticipating future steering requirements. Whilst two-point control models can capture many aspects of driver behaviour, the nature of the perceptual inputs to these two 'points' remains unclear. Inspired by experiments that focused solely on road-edge information (Land & Horwood, 1995), two-point models have tended to ignore the role of optic flow during steering control. There is recent evidence demonstrating that optic flow should be considered within two-point control steering models (Mole et al., 2016). To examine the impact of optic flow and road edges on two-point steering control, we used a driving simulator to selectively and systematically manipulate these components. We removed flow and/or road-edge information from near or far regions of the scene, and examined how behaviours changed when steering along roads where the utility of far-road information varied. Whilst steering behaviours were strongly influenced by the road edges, there were also clear contributions of optic flow to steering responses. The patterns of steering were not consistent with optic flow simply feeding into two-point control; rather, the global optic flow field appeared to support effective steering responses across the time-course of each trajectory.
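    The factorial scene manipulation is easy to enumerate; the sketch below (naming is ours) lists the candidate conditions obtained by independently including or removing flow and road edges in the near and far regions:

```python
# Enumerate presence/absence of each cue in each region (illustrative naming).
from itertools import product

REGIONS = ("near", "far")
CUES = ("flow", "edges")
FACTORS = [f"{r}_{c}" for r in REGIONS for c in CUES]  # e.g. "near_flow"

conditions = [dict(zip(FACTORS, combo))
              for combo in product((True, False), repeat=len(FACTORS))]
print(len(conditions), "candidate scene conditions; e.g.", conditions[5])
```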

    Laparoscopic motor learning and workspace exploration

    Background: Laparoscopic surgery requires operators to learn novel complex movement patterns. However, our understanding of how best to train surgeons' motor skills is inadequate, and research is needed to determine optimal laparoscopic training regimes. This difficulty is compounded by variables inherent in surgical practice -- e.g. the increasing prevalence of morbidly obese patients presents additional challenges related to restriction of movement due to abdominal wall resistance and reduced intra-abdominal space. The aim of this study was to assess learning of a surgery-related task in constrained and unconstrained conditions using a novel system linking a commercially available robotic arm with specialised software, creating the kinematic assessment tool (Omni-KAT). Methods: We created an experimental tool that records motor performance by linking a commercially available robotic arm with specialised software that presents visual stimuli and objectively measures movement outcome (kinematics). Participants were given the task of generating aiming movements along a horizontal plane to move a visual cursor on a vertical screen. One group received training that constrained movements to the correct plane, whilst the other group was unconstrained and could explore the entire 'action space'. Results: The tool successfully generated the requisite force fields and precisely recorded the aiming movements. Consistent with predictions from structural learning theory, the unconstrained group produced better performance after training, as indexed by movement duration (p < .05). Conclusion: The data showed improved performance for participants who explored the entire action space, highlighting the importance of learning the full dynamics of laparoscopic instruments. These findings, alongside the development of the Omni-KAT, open up exciting prospects for better understanding of the learning processes behind surgical training and investigating ways in which learning can be optimised.
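    As an illustration of the kind of kinematic outcome measure reported, the sketch below computes movement duration from a sampled aiming trajectory using a speed-threshold definition of movement onset and offset; the threshold and data format are our assumptions, not the Omni-KAT's actual processing pipeline:

```python
# Movement duration from sampled positions: time between the first and last
# samples whose speed exceeds a threshold (threshold and format assumed).
import numpy as np

def movement_duration(positions, dt, speed_threshold=0.05):
    """Seconds from when speed first exceeds the threshold to when it last does."""
    speeds = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    moving = np.flatnonzero(speeds > speed_threshold)
    if moving.size == 0:
        return 0.0
    return (moving[-1] - moving[0] + 1) * dt

t = np.linspace(0, 2, 201)                          # 2 s at 100 Hz, illustrative
path = np.c_[np.minimum(t, 1.0), np.zeros_like(t)]  # move 1 m, then hold still
print(f"duration: {movement_duration(path, dt=0.01):.2f} s")
```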

    Modelling visual-vestibular integration and behavioural adaptation in the driving simulator

    It is well established that not only vision but also other sensory modalities affect drivers' control of their vehicles, and that drivers adapt over time to persistent changes in sensory cues (for example in driving simulators), but the mechanisms underlying these behavioural phenomena are poorly understood. Here, we consider the existing literature on how driver steering in slalom tasks is affected by down-scaling of vestibular cues, and propose, for the first time, a computational model of driver behaviour that can, based on neurobiologically plausible mechanisms, explain the empirically observed effects, namely: decreased task performance and increased steering effort during initial exposure, followed by a partial reversal of these effects as task exposure is prolonged. Unexpectedly, the model also reproduced another previously unexplained empirical finding: a local optimum for motion down-scaling, where path-tracking is better than when one-to-one motion cues are available. Overall, our findings suggest that: (1) drivers make direct use of vestibular information as part of determining appropriate steering actions, and (2) motion down-scaling causes a yaw rate underestimation phenomenon, where drivers behave as if the simulated vehicle is rotating more slowly than it is. However, (3) in the slalom task, a certain degree of such underestimation brings a path-tracking performance benefit. Furthermore, (4) behavioural adaptation in simulated slalom driving tasks may occur due to (a) down-weighting of vestibular cues, and/or (b) increased sensitivity in timing and magnitude of steering corrections, but (c) seemingly not in the form of a full compensatory rescaling of the received vestibular input. The analyses presented here provide new insights and hypotheses about simulated driving and simulator design, and the developed models can be used to support research on multisensory integration and behavioural adaptation in both driving and other task domains.
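    The yaw-rate underestimation account can be illustrated with a generic reliability-weighted cue-combination rule (our assumption, not the paper's fitted model): when platform motion is down-scaled, the vestibular cue shrinks and the fused estimate falls below the true yaw rate:

```python
# Generic weighted fusion of a visual and a down-scaled vestibular yaw-rate
# cue; the weight and the assumption of a veridical visual cue are ours.
def fused_yaw_rate(true_yaw_rate, motion_scale, w_vestibular=0.4):
    visual = true_yaw_rate                     # visual cue assumed veridical
    vestibular = motion_scale * true_yaw_rate  # down-scaled platform motion
    return (1 - w_vestibular) * visual + w_vestibular * vestibular

for scale in (1.0, 0.5, 0.0):
    est = fused_yaw_rate(true_yaw_rate=0.2, motion_scale=scale)
    print(f"motion scale {scale:.1f}: perceived yaw rate {est:.3f} rad/s")
```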